Biological Neuron


NSPDI-SNN: An efficient lightweight SNN based on nonlinear synaptic pruning and dendritic integration

Cai, Wuque, Sun, Hongze, He, Jiayi, Liao, Qianqian, Zang, Yunliang, Chen, Duo, Yao, Dezhong, Guo, Daqing

arXiv.org Artificial Intelligence

Spiking neural networks (SNNs) are artificial neural networks based on simulated biological neurons and have attracted much attention in recent artificial intelligence technology studies. The dendrites in biological neurons have efficient information processing ability and computational power; however, the neurons of SNNs rarely match the complex structure of the dendrites. Inspired by the nonlinear structure and highly sparse properties of neuronal dendrites, in this study, we propose an efficient, lightweight SNN method with nonlinear pruning and dendritic integration (NSPDI-SNN). In this method, we introduce nonlinear dendritic integration (NDI) to improve the representation of the spatiotemporal information of neurons. We implement heterogeneous state transition ratios of dendritic spines and construct a new and flexible nonlinear synaptic pruning (NSP) method to achieve the high sparsity of SNN. We conducted systematic experiments on three benchmark datasets (DVS128 Gesture, CIFAR10-DVS, and CIFAR10) and extended the evaluation to two complex tasks (speech recognition and reinforcement learning-based maze navigation task). Across all tasks, NSPDI-SNN consistently achieved high sparsity with minimal performance degradation. In particular, our method achieved the best experimental results on all three event stream datasets. Further analysis showed that NSPDI significantly improved the efficiency of synaptic information transfer as sparsity increased. In conclusion, our results indicate that the complex structure and nonlinear computation of neuronal dendrites provide a promising approach for developing efficient SNN methods.
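The abstract describes achieving high sparsity via synaptic pruning. As a rough illustration only, the sketch below shows generic magnitude-based pruning of a weight vector; the paper's nonlinear synaptic pruning (NSP), with its heterogeneous dendritic-spine state-transition ratios, is more elaborate and is not reproduced here.

```python
# Generic magnitude-based synaptic pruning, for illustration only.
# This is NOT the paper's NSP method; it simply zeroes out the
# smallest-magnitude fraction of synaptic weights.
import random

def prune_synapses(weights, sparsity=0.9):
    """Return a copy of `weights` with the smallest-magnitude
    `sparsity` fraction of entries set to zero."""
    ranked = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    n_prune = int(sparsity * len(weights))
    pruned = list(weights)
    for i in ranked[:n_prune]:
        pruned[i] = 0.0
    return pruned

random.seed(0)
w = [random.gauss(0, 1) for _ in range(10)]
sparse_w = prune_synapses(w, sparsity=0.8)
print(sum(1 for x in sparse_w if x == 0.0), "of", len(w), "weights pruned")
```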


The GAIN Model: A Nature-Inspired Neural Network Framework Based on an Adaptation of the Izhikevich Model

Hooper, Gage K. R.

arXiv.org Artificial Intelligence

While many neural networks focus on layers to process information, the GAIN model uses a grid-based structure to improve biological plausibility and the dynamics of the model. The grid structure helps neurons interact with their closest neighbors and improve their connections with one another, as seen in biological neurons. Implemented together with the Izhikevich model, this approach allows for a computationally efficient and biologically accurate simulation that can aid in the development of neural networks, large-scale simulations, and research in the neuroscience field. This adaptation of the Izhikevich model can improve the dynamics and accuracy of the model, allowing its uses to be specialized but efficient. Early SNN models, such as the Hodgkin-Huxley model (1952), were detailed and capable of replicating the exact dynamics of neuronal spiking, accounting for every ion channel, but were too computationally inefficient. An SNN is a computational model that simulates the function of neurons: a neuron's activation is determined by its action potential, which occurs when the voltage difference between the neuron's interior and exterior (its membrane potential) rapidly increases and then decreases. In response to the limitations of these earlier models, Eugene Izhikevich (2003) introduced a spiking neural network model that achieves a balance between biological plausibility and computational efficiency (see Appendix A). The Izhikevich model can reproduce neuron behaviors while remaining computationally lightweight, and as a result it has been widely adopted for large-scale simulations.
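The Izhikevich (2003) model referenced above reduces neuron dynamics to two coupled equations plus a reset rule. The following minimal Euler-step sketch uses the standard "regular spiking" parameters from the original paper; the grid-based coupling that distinguishes the GAIN model is not reproduced here.

```python
# Minimal Euler-step simulation of a single Izhikevich (2003) neuron:
#   v' = 0.04 v^2 + 5v + 140 - u + I
#   u' = a (b v - u)
#   if v >= 30 mV: v <- c, u <- u + d
# Parameters a, b, c, d are the standard "regular spiking" values.

def simulate_izhikevich(I=10.0, a=0.02, b=0.2, c=-65.0, d=8.0,
                        dt=0.5, steps=2000):
    """Return the spike times (ms) for a constant input current I."""
    v = -65.0            # membrane potential (mV)
    u = b * v            # membrane recovery variable
    spikes = []
    for step in range(steps):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:    # spike: reset v, bump u
            spikes.append(step * dt)
            v, u = c, u + d
    return spikes

spike_times = simulate_izhikevich()
print(f"{len(spike_times)} spikes in {2000 * 0.5:.0f} ms")
```

With a constant suprathreshold current, the neuron fires tonically, which is the regular-spiking behavior the model is best known for reproducing cheaply.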


Exploring the Performance of Perforated Backpropagation through Further Experiments

Brenner, Rorry, Davis, Evan, Chaudhari, Rushi, Morse, Rowan, Chen, Jingyao, Liu, Xirui, You, Zhaoyi, Itti, Laurent

arXiv.org Artificial Intelligence

Perforated Backpropagation is a neural network optimization technique based on the modern understanding of the computational importance of dendrites within biological neurons. This paper explores further experiments from the original publication, generated from a hackathon held at the Carnegie Mellon Swartz Center in February 2025. Students and local Pittsburgh ML practitioners were brought together to experiment with the Perforated Backpropagation algorithm on the datasets and models they were using for their own projects. Results showed that the system could enhance their projects, with up to 90% model compression without negative impact on accuracy, or up to 16% increased accuracy over their original models.


Analysis of Hopfield Model as Associative Memory

Silvestri, Matteo

arXiv.org Artificial Intelligence

In this section, we won't delve into the depths of neuroscience, but rather aim to illuminate a fundamental concept: the action potential. Understanding the behavior of neurons and the process that gives rise to a spike lays a crucial foundation. This rudimentary insight serves as a key building block, enriching our comprehension of neural network models and their mathematical underpinnings. The action potential (AP), a pivotal concept in neuronal function, is the phenomenon in which a neuron fires. Transmission of a neuronal signal depends entirely on the movement of ions, such as sodium (Na+), potassium (K+), and chloride (Cl-), which are unequally distributed between the inside and the outside of the cell body. The presence and movement of these ions create a chemical gradient across the membrane, which we define as the electro-chemical gradient (ECG).


Neuroscience inspired scientific machine learning (Part-1): Variable spiking neuron for regression

Garg, Shailesh, Chakraborty, Souvik

arXiv.org Artificial Intelligence

Redundant information transfer in a neural network can increase the complexity of the deep learning model, thus increasing its power consumption. We introduce in this paper a novel spiking neuron, termed the Variable Spiking Neuron (VSN), which can reduce redundant firing by drawing lessons from the biologically inspired Leaky Integrate-and-Fire Spiking Neuron (LIF-SN). The proposed VSN blends the LIF-SN and artificial neurons: it garners the advantage of intermittent firing from the LIF-SN and the advantage of continuous activation from the artificial neuron. This property makes the proposed VSN suitable for regression tasks, a weak point for vanilla spiking neurons, all while keeping the energy budget low. The proposed VSN is tested on both classification and regression tasks. The results favorably support the efficacy of the proposed spiking neuron, particularly for regression tasks.
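For context on the spiking component the VSN builds on, here is a minimal sketch of a vanilla leaky integrate-and-fire (LIF) neuron. The leak factor and threshold below are illustrative values, not taken from the paper, and the VSN's blending with a continuous activation is not reproduced.

```python
# A vanilla leaky integrate-and-fire (LIF) neuron over a discrete
# input sequence. The membrane potential leaks by a factor `beta`
# each step, integrates the input current, and emits a spike (then
# resets) whenever it crosses `threshold`. Illustrative values only.

def lif_forward(inputs, beta=0.9, threshold=1.0):
    """Run a LIF neuron over input currents; return the binary spike train."""
    membrane = 0.0
    spikes = []
    for current in inputs:
        membrane = beta * membrane + current   # leaky integration
        if membrane >= threshold:              # fire and reset
            spikes.append(1)
            membrane = 0.0
        else:
            spikes.append(0)
    return spikes

print(lif_forward([0.4] * 10))   # -> [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
```

The intermittent firing visible in the output is what keeps the energy budget low: the neuron transmits only when accumulated evidence crosses the threshold, instead of producing a continuous activation at every step.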


Model of a Biological Neuron as a Temporal Neural Network

Neural Information Processing Systems

A biological neuron can be viewed as a device that maps a multidimensional temporal event signal (dendritic postsynaptic activations) into a unidimensional temporal event signal (action potentials). We have designed a network, the Spatio-Temporal Event Mapping (STEM) architecture, which can learn to perform this mapping for arbitrary biophysical models of neurons. Such a network, appropriately trained, called a STEM cell, can be used in place of a conventional compartmental model in simulations where only the transfer function is important, such as network simulations. The STEM cell offers advantages over compartmental models in terms of computational efficiency, analytical tractability, and as a framework for VLSI implementations of biological neurons.


The Inescapable Conclusion: Machine Learning Is Not Like Your Brain - KDnuggets

#artificialintelligence

The final article in this nine-part series summarizes the many reasons why Machine Learning is not like your brain - along with a few similarities. Hopefully, these articles have helped to explain the capabilities and limitations of biological neurons, how these relate to ML, and ultimately what will be needed to replicate the contextual knowledge of the human brain, enabling AI to attain true intelligence and understanding. In examining Machine Learning and the biological brain, the inescapable conclusion is that ML is not very much like a brain at all. In fact, the only similarity is that a neural network consists of things called neurons connected by things called synapses. Otherwise, the signals are different, the timescale is different, and the algorithms of ML are impossible in biological neurons for a number of reasons.


Efficiency Spells the Difference Between Biological Neurons and Their Artificial Counterparts - KDnuggets

#artificialintelligence

Machine learning has made great advances, but as this series has discussed, doesn't have much in common with the way your brain works. Part 8 of the series explores a single facet of biological neurons which, so far, have kept them way ahead of their artificial counterparts: their efficiency. Your brain contains about 86 billion neurons, which are crammed into a volume of somewhat over one liter. Although machine learning can do many things which the human brain cannot, the brain is able to perform continuous speech recognition, visual interpretation, and a host of other things, all while dissipating about 12 watts. In comparison, my laptop draws about 65 watts and my desktop machine draws over 200 watts, and neither of them is capable of running the huge ML networks which are in use today.


Making computer chips act more like brain cells

Stanford Engineering

The human brain is an amazing computing machine. Weighing only three pounds or so, it can process information a thousand times faster than the fastest supercomputer, store a thousand times more information than a powerful laptop, and do it all using no more energy than a 20-watt lightbulb. Researchers are trying to replicate this success using soft, flexible organic materials that can operate like biological neurons and someday might even be able to interconnect with them. Eventually, soft "neuromorphic" computer chips could be implanted directly into the brain, allowing people to control an artificial arm or a computer monitor simply by thinking about it. Like real neurons -- but unlike conventional computer chips -- these new devices can send and receive both chemical and electrical signals.


Machine Learning Is Not Like Your Brain Part 5: Biological Neurons Can't Do Summation of Inputs - KDnuggets

#artificialintelligence

Fundamental to all ML systems is the idea that the artificial neuron, the perceptron, sums the weighted signals from the various input synapses. Admittedly, there are a few cases where this appears to work in a biological neuron model. The biological neuron has a membrane potential or voltage which I'll call its "charge." The neuron accumulates charge from its input synapses and the various synapses have "weights" corresponding to the amount of charge they contribute. The synapse weight may be positive or negative.
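The summation the article refers to can be stated in a few lines. The sketch below is a textbook perceptron, not anything specific to the article's biological-neuron model: it computes a weighted sum of its inputs, where weights may be positive (excitatory) or negative (inhibitory), and applies a step activation.

```python
# Textbook perceptron: weighted sum of inputs plus bias, followed
# by a step activation. Positive weights play the role of excitatory
# synapses, negative weights the role of inhibitory ones.

def perceptron(inputs, weights, bias=0.0):
    """Return 1 if the weighted sum (plus bias) is non-negative, else 0."""
    total = bias + sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= 0 else 0

# Two excitatory synapses and one inhibitory one:
print(perceptron([1.0, 0.5, 1.0], [0.8, 0.6, -0.4]))   # sum = 0.7 -> 1
```

It is exactly this instantaneous summation step that the article argues biological neurons cannot perform, since their membrane charge accumulates over time rather than all at once.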